Search for: All records

Creators/Authors contains: "Van Hentenryck, Pascal"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo (administrative) period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. This paper studies how to train machine-learning models that directly approximate the optimal solutions of constrained optimization problems. This is an empirical risk minimization problem under constraints, which is challenging because training must balance optimality and feasibility conditions. Supervised learning methods often approach this challenge by training the model on a large collection of pre-solved instances. This paper takes a different route and proposes Primal-Dual Learning (PDL), a self-supervised training method that requires neither a set of pre-solved instances nor an optimization solver for training and inference. Instead, PDL mimics the trajectory of an Augmented Lagrangian Method (ALM) and jointly trains primal and dual neural networks. Being a primal-dual method, PDL uses instance-specific penalties on the constraint terms in the loss function used to train the primal network. Experiments show that, on a set of nonlinear optimization benchmarks, PDL typically exhibits negligible constraint violations and minor optimality gaps, and comes remarkably close to the ALM solutions. PDL also demonstrates improved or comparable performance in terms of optimality gaps, constraint violations, and training times relative to existing approaches. (A minimal sketch of such a primal-dual training loop appears after this list.)
  2. This paper studies how to integrate rider mode preferences into the design of on-demand multimodal transit systems (ODMTSs). It is motivated by a common worry in transit agencies that an ODMTS may be poorly designed if the latent demand, that is, new riders adopting the system, is not captured. This paper proposes a bilevel optimization model to address this challenge, in which the leader problem determines the ODMTS design and the follower problems identify the most cost-efficient and convenient route for each rider under the chosen design. The leader model contains a choice model for every potential rider that determines whether the rider adopts the ODMTS given her proposed route. To solve the bilevel optimization model, the paper proposes an exact decomposition method that includes Benders optimality cuts and no-good cuts to ensure the consistency of the rider choices in the leader and follower problems. Moreover, to improve computational efficiency, the paper proposes upper and lower bounds on trip durations for the follower problems, valid inequalities that strengthen the no-good cuts, and problem-specific preprocessing techniques that reduce the problem size. The proposed method is validated in an extensive computational study on a real data set from the Ann Arbor Area Transportation Authority, the transit agency for the broader Ann Arbor and Ypsilanti region in Michigan. The study considers the impact of a number of factors, including the price of on-demand shuttles, the number of hubs, and criteria for access to the transit system. The designed ODMTSs feature high adoption rates and significantly shorter trip durations compared with the existing transit system, and they highlight the benefits of ensuring access for low-income riders. Finally, the computational study demonstrates the efficiency of the decomposition method for the case study and the benefits of the computational enhancements, which improve the baseline method by several orders of magnitude. (A toy illustration of the leader-follower structure and the rider choice model appears after this list.) Funding: This research was partly supported by the National Science Foundation [LEAP-HI Proposal NSF-1854684] and the Department of Energy [Research Award 7F-30154].
  3. AC Optimal Power Flow (AC-OPF) is a fundamental building block in power system optimization. It is often solved repeatedly, especially in regions with large penetration of renewable generation, to avoid violating operational limits. Recent work has shown that deep learning can provide highly accurate approximations of AC-OPF. However, deep learning approaches may suffer from scalability issues, especially when applied to large realistic grids. This paper addresses these scalability limitations and proposes a load embedding scheme using a three-step approach. The first step formulates the load embedding problem as a bilevel optimization model that can be solved with a penalty method. The second step learns to approximate this encoding optimization so that load embeddings can be produced quickly for new OPF instances. The third step is a deep learning model that uses the load embeddings to produce accurate AC-OPF approximations. The approach is evaluated experimentally on large-scale test cases from the NESTA library. The results demonstrate that the proposed approach yields order-of-magnitude improvements in training convergence and prediction accuracy. (A sketch of the second and third steps appears after this list.)
  4. Idle vehicle relocation is crucial for addressing the demand-supply imbalances that frequently arise in ride-hailing systems. The current mainstream methodologies, optimization and reinforcement learning, suffer from significant computational drawbacks: optimization models need to be solved in real time and often trade off model fidelity (and hence solution quality) for computational efficiency, while reinforcement learning is expensive to train and often struggles to achieve coordination among a large fleet. This paper designs a hybrid approach that leverages the strengths of the two while overcoming their drawbacks. Specifically, it trains an optimization proxy, i.e., a machine-learning model that approximates an optimization model, and then refines the proxy with reinforcement learning. This Reinforcement Learning from Optimization Proxy (RLOP) approach is computationally efficient to train and deploy, and it achieves better results than reinforcement learning or optimization alone. Numerical experiments on the New York City dataset show that the RLOP approach significantly reduces both relocation costs and computation time compared to the optimization model, while pure reinforcement learning fails to converge due to computational complexity. (A two-stage sketch of this approach appears after this list.)
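Sketch for item 1: a minimal, self-contained PDL-style training loop in PyTorch. The toy problem (minimize ||y - x||^2 subject to sum(y) = 1), the network sizes, and the penalty schedule are illustrative assumptions, not the paper's benchmarks; the sketch only shows how a primal network can be trained on an augmented-Lagrangian loss while a dual network tracks the ALM multiplier update, with no pre-solved instances or solver in the loop.

    import torch
    import torch.nn as nn

    # Toy parametric problem (an assumption, not the paper's benchmarks):
    # minimize ||y - x||^2  subject to  sum(y) = 1, for instance parameters x.
    DIM = 4
    primal = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, DIM))
    dual = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, 1))
    opt_primal = torch.optim.Adam(primal.parameters(), lr=1e-3)
    opt_dual = torch.optim.Adam(dual.parameters(), lr=1e-3)
    rho = 1.0  # penalty coefficient, increased across outer iterations as in ALM

    for outer in range(10):  # mirrors the outer iterations of an ALM trajectory
        # Primal phase: minimize the augmented Lagrangian with multipliers frozen.
        for _ in range(200):
            x = torch.randn(128, DIM)  # self-supervised: instances are sampled, not pre-solved
            y = primal(x)
            lam = dual(x).detach()  # instance-specific multipliers from the dual network
            obj = ((y - x) ** 2).sum(dim=1)
            g = y.sum(dim=1, keepdim=True) - 1.0  # equality-constraint residual
            loss = (obj + (lam * g).squeeze(1) + 0.5 * rho * g.squeeze(1) ** 2).mean()
            opt_primal.zero_grad(); loss.backward(); opt_primal.step()
        # Dual phase: regress the dual network toward the ALM multiplier update
        # lambda <- lambda + rho * g(y), with the primal network frozen.
        for _ in range(200):
            x = torch.randn(128, DIM)
            with torch.no_grad():
                g = primal(x).sum(dim=1, keepdim=True) - 1.0
                target = dual(x) + rho * g
            loss_d = ((dual(x) - target) ** 2).mean()
            opt_dual.zero_grad(); loss_d.backward(); opt_dual.step()
        rho = min(rho * 2.0, 1e3)  # simple illustrative penalty schedule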
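Sketch for item 2: a faithful Benders implementation is beyond a short example, so this toy Python program only illustrates the leader-follower structure of the ODMTS design problem. Brute-force enumeration stands in for the paper's exact decomposition: the leader picks which hubs to open, each follower/rider takes the cheapest route under the design, and a simple threshold rule stands in for the rider choice model. All coordinates, costs, speeds, and the adoption rule are hypothetical.

    import itertools
    import math

    # Hypothetical data: riders as (origin, destination, status-quo duration)
    # and candidate hub locations mapped to opening costs. Units are arbitrary.
    riders = [((0, 0), (8, 8), 14.0), ((1, 5), (9, 2), 12.0), ((7, 1), (2, 7), 15.0)]
    hubs = {(4, 4): 6.0, (6, 2): 5.0, (2, 6): 5.0}
    SHUTTLE_SPEED, TRANSIT_SPEED = 1.0, 2.0  # transit between hubs is faster

    def duration(rider, open_hubs):
        """Follower problem: cheapest route for one rider under a given design."""
        o, d, _ = rider
        best = math.dist(o, d) / SHUTTLE_SPEED  # direct on-demand shuttle
        for h1 in open_hubs:
            for h2 in open_hubs:
                legs = (math.dist(o, h1) / SHUTTLE_SPEED
                        + math.dist(h1, h2) / TRANSIT_SPEED
                        + math.dist(h2, d) / SHUTTLE_SPEED)
                best = min(best, legs)
        return best

    def adopts(rider, dur):
        """Toy choice model: adopt if the route beats 90% of the status quo."""
        return dur <= 0.9 * rider[2]

    # Leader problem: brute-force enumeration over hub designs (a stand-in for
    # the paper's exact Benders decomposition with no-good cuts).
    best_design, best_cost = None, float("inf")
    for k in range(len(hubs) + 1):
        for design in itertools.combinations(hubs, k):
            cost = sum(hubs[h] for h in design)
            for r in riders:
                dur = duration(r, design)
                cost += dur if adopts(r, dur) else r[2]  # non-adopters keep status quo
            if cost < best_cost:
                best_design, best_cost = design, cost
    print("open hubs:", best_design, "total cost:", round(best_cost, 2))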
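Sketch for item 3: the PyTorch fragment below assumes step 1 (the bilevel/penalty encoding optimization) has already produced low-dimensional embeddings for a set of training loads, and illustrates steps 2 and 3: an encoder learns to reproduce those embeddings, and a predictor maps embeddings to AC-OPF setpoints. Dimensions and the random training tensors are placeholders, not NESTA data.

    import torch
    import torch.nn as nn

    N_LOADS, EMB_DIM, N_GENS = 1000, 16, 50  # illustrative sizes, not NESTA cases

    # Placeholder data: in the paper, `embeddings` would come from the step-1
    # encoding optimization and `setpoints` from solved AC-OPF instances.
    loads = torch.rand(4096, N_LOADS)
    embeddings = torch.rand(4096, EMB_DIM)
    setpoints = torch.rand(4096, N_GENS)

    # Step 2: learn to approximate the encoding optimization.
    encoder = nn.Sequential(nn.Linear(N_LOADS, 128), nn.ReLU(), nn.Linear(128, EMB_DIM))
    opt_e = torch.optim.Adam(encoder.parameters(), lr=1e-3)
    for _ in range(500):
        idx = torch.randint(0, 4096, (256,))
        loss = ((encoder(loads[idx]) - embeddings[idx]) ** 2).mean()
        opt_e.zero_grad(); loss.backward(); opt_e.step()

    # Step 3: predict AC-OPF setpoints from the much smaller load embeddings.
    predictor = nn.Sequential(nn.Linear(EMB_DIM, 128), nn.ReLU(), nn.Linear(128, N_GENS))
    opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)
    for _ in range(500):
        idx = torch.randint(0, 4096, (256,))
        with torch.no_grad():
            emb = encoder(loads[idx])  # encoder is frozen during the prediction step
        loss = ((predictor(emb) - setpoints[idx]) ** 2).mean()
        opt_p.zero_grad(); loss.backward(); opt_p.step()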
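Sketch for item 4: the two-stage RLOP idea in PyTorch. A policy network is first trained to imitate relocation decisions from an optimization model (the optimization proxy), then fine-tuned with a REINFORCE-style policy gradient. The state encoding, the imitation labels, and the reward function are placeholders for the paper's optimization model and ride-hailing simulator.

    import torch
    import torch.nn as nn
    from torch.distributions import Categorical

    N_ZONES, STATE_DIM = 10, 30  # illustrative: per-zone supply/demand features
    policy = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ZONES))
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

    # Stage 1: optimization proxy -- imitate the optimization model's relocation
    # targets (random placeholders here) with a cross-entropy loss.
    states = torch.rand(2048, STATE_DIM)
    opt_actions = torch.randint(0, N_ZONES, (2048,))  # stand-in for solver decisions
    ce = nn.CrossEntropyLoss()
    for _ in range(300):
        idx = torch.randint(0, 2048, (128,))
        loss = ce(policy(states[idx]), opt_actions[idx])
        opt.zero_grad(); loss.backward(); opt.step()

    def simulate_reward(state, action):
        """Placeholder for the ride-hailing simulator's relocation reward."""
        return -torch.rand(state.shape[0])  # e.g., negative relocation cost

    # Stage 2: refine the proxy with a REINFORCE-style policy gradient.
    for _ in range(300):
        s = torch.rand(128, STATE_DIM)
        dist = Categorical(logits=policy(s))
        a = dist.sample()
        r = simulate_reward(s, a)
        loss = -(dist.log_prob(a) * r).mean()
        opt.zero_grad(); loss.backward(); opt.step()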